What is Optical Networking? | Enterprise Networking Planet

2022-08-26, by Joyce Lu

Optical networking is a technology that uses light to transmit data between devices. It provides high bandwidth and low latency and has been the de facto standard for long-haul data communications for many years. Optical fiber carries most long-distance voice and data traffic worldwide.

The history of optical networking is long, and as its services and use cases expand, trends in making it more flexible, intelligent, and efficient will continue to grow.

Optical networking uses light to transmit data over long distances and across far-flung networks. It underpins a wide range of applications, from internet backbones to cable television.

Optical networking is important because it allows for high-speed data transmission over long distances. For example, optical networking ensures a New York user can access a Nairobi server as fast as the laws of physics allow.
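That physical limit is easy to estimate: light in silica fiber travels at roughly the vacuum speed of light divided by the glass's refractive index. The sketch below computes a one-way propagation floor for a New York–Nairobi path; the distance and index values are illustrative assumptions, not measured route lengths.

```python
# Rough one-way latency floor for light in fiber between New York and
# Nairobi. The distance and refractive index are illustrative assumptions.
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.468           # typical refractive index of a silica fiber core
DISTANCE_KM = 11_700          # approximate NY-Nairobi great-circle distance

speed_in_fiber = C_VACUUM_KM_S / FIBER_INDEX       # roughly 204,000 km/s
one_way_ms = DISTANCE_KM / speed_in_fiber * 1000   # convert seconds to ms

print(f"One-way propagation floor: {one_way_ms:.1f} ms")
```

Real routes add distance and equipment delay on top of this floor, but the calculation shows why even an ideal intercontinental round trip takes on the order of a tenth of a second.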

The technology behind optical networking is based on the principle of total internal reflection. When light strikes the surface of a medium, such as a fiber-optic cable, some of the light is reflected off the surface. The angle at which the light is reflected depends on the properties of the medium and the angle of incidence (the angle at which the light hits the surface).

If the angle of incidence is greater than the critical angle, then all of the light will be reflected; this is called total internal reflection. Total internal reflection can be used to create an optical fiber, a thin strand of glass or plastic that guides light along its length.

When light travels through an optical fiber, it undergoes total internal reflection many times, causing it to bounce off the walls of the fiber. This bouncing effect causes the light to travel in a zigzag pattern down the length of the fiber.
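The critical angle itself follows from Snell's law: sin(θc) = n_cladding / n_core. The short sketch below computes it for index values typical of silica fiber; the specific numbers are assumptions for illustration.

```python
import math

# Critical angle for total internal reflection at the core/cladding
# boundary, from Snell's law: sin(theta_c) = n_cladding / n_core.
# The index values are typical of silica fiber, used here as assumptions.
n_core = 1.468
n_cladding = 1.447

theta_c = math.degrees(math.asin(n_cladding / n_core))
print(f"Critical angle: {theta_c:.1f} degrees")
# Light striking the boundary at an angle of incidence greater than
# theta_c (measured from the normal) is totally internally reflected.
```

Because the two indices are deliberately close, the critical angle is large, meaning only light entering the fiber nearly parallel to its axis stays trapped, which is exactly the zigzag guiding behavior described above.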

By carefully controlling the properties of the fiber, engineers can control how much light is reflected and how far it travels before being reflected again. This allows them to design optical fibers that can carry data over long distances with very little signal loss.

The optical network is made up of several components: optical fiber, transceivers, amplifiers, multiplexers, and optical switches.


The optical fiber is the medium that carries the light signals. It is composed of several layers, including the following:

Core: the central strand of the fiber, which carries the light.

Cladding: the layer surrounding the core, with a lower refractive index, which keeps the light confined by total internal reflection.

Buffer coating: the protective outer layer that shields the fiber from damage and moisture.

The core and cladding are typically made of glass, while the buffer coating is usually made of plastic.

A transceiver is a device that converts electrical signals into optical signals and vice versa, and is typically deployed at the last mile of the connection. It is the interface between the optical network and the electronic devices that use it, such as computers and routers.

As the name suggests, an amplifier is a device that amplifies optical signals, so they can travel over long distances without losing strength. Amplifiers are placed along an optical fiber at regular intervals to boost the signal.

Multiplexers are simply devices that take multiple signals and combine them into a single signal. This is done by assigning each signal a different light wavelength, allowing the multiplexer to send multiple signals simultaneously down a single optical fiber without interference.
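The idea can be sketched in a few lines: each signal gets its own wavelength slot on the shared fiber, and the receiver separates them again. The channel names, band start, and spacing below are illustrative assumptions, loosely modeled on a conventional C-band grid.

```python
# Toy sketch of wavelength-division multiplexing: each tributary signal
# is assigned its own wavelength, so all of them can share one fiber
# without interfering. Wavelength values are illustrative assumptions.
C_BAND_START_NM = 1530.0    # start of the conventional C-band
CHANNEL_SPACING_NM = 0.8    # roughly a 100 GHz grid spacing

def multiplex(signals):
    """Assign each signal its own wavelength on the shared fiber."""
    return {C_BAND_START_NM + i * CHANNEL_SPACING_NM: s
            for i, s in enumerate(signals)}

def demultiplex(fiber):
    """Recover the individual signals at the far end, in wavelength order."""
    return [fiber[wl] for wl in sorted(fiber)]

fiber = multiplex(["voice", "video", "data"])
print(fiber)
print(demultiplex(fiber))
```

A real multiplexer does this optically with diffraction gratings or filters, but the bookkeeping is the same: one wavelength per channel, one fiber for all of them.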

An optical switch is a device that routes optical signals from one fiber to another. Optical switches are used to control the flow of traffic in an optical network and are typically used in high-capacity networks.


The history of optical networking began in the 1790s when French inventor Claude Chappe invented the optical semaphore telegraph, one of the earliest examples of an optical communication system.

Nearly a century later, in 1880, Alexander Graham Bell patented the Photophone, an optical telephone system. Although the Photophone was groundbreaking, Bell's earlier invention, the telephone, proved far more practical. As a result, the Photophone never left the experimental stage.

It wasn’t until the 1920s that John Logie Baird in England and Clarence W. Hansell in the United States patented the idea of using arrays of hollow pipes or transparent rods to transmit images for television or facsimile systems.

In 1954, Dutch scientist Abraham Van Heel and British scientist Harold H. Hopkins each published scientific papers on imaging bundles of fibers. Hopkins focused on unclad fibers, while Van Heel concentrated on bundles of clad fibers — those with a transparent cladding layer of lower refractive index surrounding the bare fiber.

This protected the fiber reflection surface from outside distortion and significantly reduced interference between fibers. The development of the imaging bundle was an important step in the development of fiber optics. Protecting the fiber surfaces from outside interference allowed for more accurate transmission of light signals through the fibers.

By 1960, glass-clad fibers had a loss of about 1 decibel (dB) per meter, suitable for medical imaging but too high for communication. In 1961, Elias Snitzer of American Optical published a theoretical description of an optical fiber with a tiny core that could transport light via just one waveguide mode.

In the mid-1960s, Dr. Charles Kao proposed that fiber loss would need to fall to roughly 10 to 20 dB per kilometer for optical communication to be practical. This target helped drive improvements in the range and reliability of long-range communication systems. In addition to his work on loss rates, Dr. Kao also demonstrated that a purer form of glass would be needed to reduce light loss.

In the summer of 1970, a team of researchers at Corning Glass Works began experimenting with fused silica, a material known for its extreme purity, high melting point, and low refractive index.

The team, composed of Robert Maurer, Donald Keck, and Peter Schultz, quickly realized that fused silica could be used to create a new type of cable known as an "optical waveguide fiber." This fiber-optic cable could carry 65,000 times more information than traditional copper wires. Furthermore, the light waves used to carry information could be decoded at a destination even a thousand miles away.

This invention revolutionized long-distance communication and paved the way for today’s fiber-optic technology. The team solved the decibel-loss problem defined by Dr. Kao, and in 1973, John MacChesney improved the chemical vapor-deposition process for fiber production at Bell Labs. As a result, commercial production of fiber-optic cable became possible.

In April 1977, General Telephone and Electronics utilized a fiber-optic network to conduct live phone traffic in Long Beach, California, for the first time. Bell Labs soon followed suit in May 1977 with an optical telephone communication system in the downtown Chicago area that spanned 1.5 miles. Each optical-fiber pair could transmit 672 voice channels, equivalent to a DS3 circuit.

In the early 1980s, the second generation of fiber-optic communication was designed for business use and used 1.3-micron InGaAsP semiconductor lasers. These systems were operational at bit rates of up to 1.7 Gbps in 1987, with repeater spacing of up to 50 kilometers.

The third generation of fiber-optic networks utilized systems that operated at 1.55 microns and had losses of about 0.2 dB per kilometer.
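The loss figures quoted across these generations can be put on a common footing with the standard attenuation relation, P_out = P_in × 10^(−αL/10), where α is the loss in dB per kilometer and L the span length. The launch power below is an illustrative assumption; the loss rates come from the history above.

```python
# How much launched power survives a fiber span at a given loss rate,
# using the standard attenuation relation P_out = P_in * 10^(-alpha*L/10).
# The 1 mW launch power is an illustrative assumption.
def received_mw(p_in_mw, loss_db_per_km, km):
    """Power remaining (mW) after `km` of fiber at `loss_db_per_km`."""
    return p_in_mw * 10 ** (-loss_db_per_km * km / 10)

launch_mw = 1.0
# 1960s glass-clad fiber: ~1 dB per meter, i.e. 1000 dB/km.
print(received_mw(launch_mw, 1000, 0.1))  # gone within 100 meters
# Kao's ~20 dB/km target: about 1% of the power survives 1 km.
print(received_mw(launch_mw, 20, 1))
# Third-generation 0.2 dB/km fiber: that same 1% survives 100 km.
print(received_mw(launch_mw, 0.2, 100))
```

The hundredfold jump in usable span length between Kao's target and third-generation fiber is what made amplifier and repeater spacings of tens of kilometers practical.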

The fourth generation of fiber-optic communication systems relied on optical amplification to reduce the number of repeaters needed and wavelength-division multiplexing (WDM) to increase data capacity.

In 2006, a bit rate of 14 terabits per second (Tbps) was achieved over a single 160-kilometer line using optical amplifiers. As of 2021, Japanese scientists were able to transmit 319 Tbps over 3,000 kilometers with four-core fiber cables.

Although the capacity of these fourth-generation fiber-optic communication systems is much greater than that of the earlier generations, the basic principles remain the same: electrical signals are converted into light pulses that are sent through optical fibers and then converted back into electrical signals at the receiving end.

With each generation, however, the component parts have become smaller, more reliable, and less expensive. As a result, fiber-optic communication has become an increasingly important part of our global telecommunications infrastructure.


The optical network edge is where traffic enters and exits the network. In order to meet the needs of cloud-based applications, optical networks are moving closer to the end user. This allows for lower latency and more consistent performance.

As cyberattacks become more common, data protection in motion will continue to be a major concern. SASE (secure access service edge), which uses cloud-native security functions at service endpoints, has been getting more attention lately. End-point protection might render security controls on the connectivity network unnecessary.

Although endpoint protection may not eliminate the need for encryption, it helps secure sensitive data and applications. Without a unified security control in place, however, protection at Layer 1 becomes increasingly difficult.

We can better protect our resources by encrypting control, management, and user traffic. This makes it far more difficult for attackers to penetrate the system, significantly lowering the chance of a successful cyberattack. As businesses rely more on data and connectivity, the need for robust security solutions will only grow.

Open optical networking is a type of optical network that uses standard, open interfaces to allow for the integration of different vendors’ equipment. This allows for more choice and flexibility when it comes to optical networking components. In addition, it makes it easier to add new capabilities and services as they become available.

As data traffic continues to grow, there is an increasing demand for higher bandwidth and capacity. Optical spectrum services provide this by using the optical spectrum to increase the capacity of existing fiber-optic networks. These services are becoming more popular as they offer a cost-effective way to meet the ever-growing demand for data.

Outdoor deployments in street cabinets are becoming more common as the need for higher bandwidth and capacity grows. Outdoor optical fibers can be run directly to user locations, providing a more direct connection and lower latency.

As optical networks continue to evolve, the need for smaller, more compact components is becoming more pronounced. This is because space is often limited in data center environments. Compact modular optical components offer a way to save space while still providing high performance.

Intelligent optical networks are optical networks that use artificial intelligence (AI) to optimize performance. AI can be used to identify and correct problems in the network automatically. This allows for a more efficient and reliable network.

In addition, AI can be used to predict future traffic patterns and demand. This information can be used to provision capacity in advance, ensuring the network can meet future demands.

Flexible grid architectures are becoming more popular as they offer a way to increase the capacity of existing optical fibers. Rather than dividing the spectrum into fixed-width channels, flexible grids allow channel widths and spacing to be tailored to each signal. This packs more data onto each fiber, increasing the network's capacity.

Wavelength-division multiplexing is a technology that allows for multiple wavelengths of light to be carried on a single optical fiber. WDM on demand is a type of WDM that allows for the provisioning of capacity on demand. This means that capacity can be added as needed, without the need to install new optical fibers.

Optical networking has come a long way in its relatively short history. From humble beginnings, it is now an essential part of the infrastructure for many large networks. It is a critical backbone of the internet, has revolutionized how we communicate, and has ushered in an era of unprecedented technological progress.

With trends such as 5G coming into maturity, optical networking looks poised to continue playing a vital role in our increasingly digital world.
